high-risk system

TAI Scan Tool: A RAG-Based Tool With Minimalistic Input for Trustworthy AI Self-Assessment

Davvetas, Athanasios, Ziouvelou, Xenia, Dami, Ypatia, Kaponis, Alexios, Giouvanopoulou, Konstantina, Papademas, Michael

arXiv.org Artificial Intelligence

This paper introduces the TAI Scan Tool, a RAG-based TAI self-assessment tool with minimalistic input. The current version of the tool supports legal TAI assessment, with a particular emphasis on facilitating compliance with the AI Act. It follows a two-step approach with a pre-screening and an assessment phase. The assessment output includes insight into the risk level of the AI system according to the AI Act, while at the same time retrieving relevant articles to aid with compliance and notify users of their obligations. Our qualitative evaluation using use-case scenarios yields promising results, correctly predicting risk levels while retrieving relevant articles across three distinct semantic groups. Furthermore, interpretation of the results shows that the tool's reasoning relies on comparison with the setting of high-risk systems, a behaviour attributed to the fact that their deployment requires careful consideration and is therefore frequently addressed within the AI Act.
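The two-step design described above (a cheap pre-screening pass followed by a retrieval-backed assessment) can be sketched in outline. Everything below is illustrative: the article snippets are paraphrased placeholders rather than official AI Act text, the keyword cues and risk labels are invented, and a simple bag-of-words cosine similarity stands in for the tool's actual RAG retrieval.

```python
import math
from collections import Counter

# Toy corpus of AI Act article snippets (paraphrased placeholders, not official text).
ARTICLES = {
    "Article 5":  "prohibited practices manipulation social scoring biometric categorisation",
    "Article 6":  "classification rules for high-risk ai systems annex listed use cases",
    "Article 9":  "risk management system for high-risk ai systems lifecycle",
    "Article 50": "transparency obligations for certain ai systems interacting with people",
}

def _vec(text):
    """Bag-of-words term counts."""
    return Counter(text.lower().split())

def _cosine(a, b):
    """Cosine similarity between two term-count vectors."""
    num = sum(a[t] * b[t] for t in set(a) & set(b))
    den = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return num / den if den else 0.0

def pre_screen(description):
    """Step 1: cheap keyword pre-screen for settings that look high-risk (hypothetical cues)."""
    high_risk_cues = {"recruitment", "credit", "biometric", "enforcement", "medical"}
    return any(cue in description.lower() for cue in high_risk_cues)

def assess(description, top_k=2):
    """Step 2: retrieve the most relevant articles and attach a coarse risk label."""
    query = _vec(description)
    ranked = sorted(ARTICLES, key=lambda art: _cosine(query, _vec(ARTICLES[art])), reverse=True)
    risk = "high-risk (review Annex III)" if pre_screen(description) else "minimal/limited risk"
    return risk, ranked[:top_k]

risk, articles = assess("AI system screening recruitment candidates with biometric data")
print(risk, articles)
```

A production version would replace the cosine step with an embedding index over the full Regulation and let an LLM generate the obligations summary from the retrieved articles; the two-phase control flow is the point of the sketch.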


From Bias to Accountability: How the EU AI Act Confronts Challenges in European GeoAI Auditing

Matuszczyk, Natalia, Barnes, Craig R., Gupta, Rohit, Ozel, Bulent, Mitra, Aniket

arXiv.org Artificial Intelligence

Bias in geospatial artificial intelligence (GeoAI) models has been documented, yet the evidence is scattered across narrowly focused studies. We synthesize this fragmented literature to provide a concise overview of bias in GeoAI and examine how the EU's Artificial Intelligence Act (EU AI Act) shapes audit obligations. We discuss recurring bias mechanisms, including representation, algorithmic, and aggregation bias, and map them to specific provisions of the EU AI Act. By applying the Act's high-risk criteria, we demonstrate that widely deployed GeoAI applications qualify as high-risk systems. We then present examples of recent audits along with an outline of practical methods for detecting bias. To our knowledge, this study represents the first integration of GeoAI bias evidence into the EU AI Act context, identifying high-risk GeoAI systems and mapping bias mechanisms to the Act's articles. Although the analysis is exploratory, it suggests that even well-curated European datasets should undergo routine bias audits before 2027, when the AI Act's high-risk provisions take full effect.
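As one concrete instance of the practical bias-detection methods the survey outlines, representation bias can be flagged by comparing a dataset's per-region sample shares against reference population shares. The region names, counts, and tolerance below are invented for illustration, not drawn from any audited GeoAI dataset:

```python
def representation_gaps(dataset_counts, reference_shares, tolerance=0.05):
    """Flag regions whose share of the dataset falls short of their
    reference share by more than `tolerance` (underrepresentation)."""
    total = sum(dataset_counts.values())
    gaps = {}
    for region, ref in reference_shares.items():
        share = dataset_counts.get(region, 0) / total
        if ref - share > tolerance:
            gaps[region] = round(ref - share, 3)
    return gaps

# Hypothetical street-view image counts vs. reference population shares.
counts = {"urban": 900, "suburban": 80, "rural": 20}
reference = {"urban": 0.60, "suburban": 0.25, "rural": 0.15}
print(representation_gaps(counts, reference))  # → {'suburban': 0.17, 'rural': 0.13}
```

The same comparison generalises to any stratification a GeoAI audit cares about (land-use class, country, image acquisition season), which is why representation checks are typically the cheapest first step before probing algorithmic or aggregation bias.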


It's complicated. The relationship of algorithmic fairness and non-discrimination regulations in the EU AI Act

Meding, Kristof

arXiv.org Artificial Intelligence

What constitutes a fair decision? This question is difficult for humans and becomes more challenging still when Artificial Intelligence (AI) models are used. In light of discriminatory algorithmic behaviors, the EU has recently passed the AI Act, which mandates specific rules for AI models, incorporating both traditional legal non-discrimination regulations and machine-learning-based algorithmic fairness concepts. This paper aims to bridge these two concepts in the AI Act through, first, a high-level introduction of both concepts aimed at legal and computer-science-oriented scholars, and, second, an in-depth analysis of the relationship in the AI Act between legal non-discrimination regulations and algorithmic fairness. Our analysis reveals three key findings: (1) most non-discrimination regulations target only high-risk AI systems; (2) the regulation of high-risk systems encompasses both data-input requirements and output monitoring, though these regulations are often inconsistent and raise questions of computational feasibility; (3) regulations for General Purpose AI models, such as Large Language Models that are not simultaneously classified as high-risk systems, currently lack specificity compared to other regulations. Based on these findings, we recommend developing more specific auditing and testing methodologies for AI systems. This paper aims to serve as a foundation for future interdisciplinary collaboration between legal scholars and computer-science-oriented machine learning researchers studying discrimination in AI systems.
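To make the algorithmic-fairness side of this comparison concrete, one widely used metric is the demographic parity difference: the gap in positive-prediction rates between groups. The predictions and group labels below are toy values, and demographic parity is just one of several fairness notions the paper contrasts with legal non-discrimination concepts:

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap between the highest and lowest positive-prediction
    rate across groups; 0.0 means perfect demographic parity."""
    rates = {}
    for g in set(group):
        members = [i for i, gg in enumerate(group) if gg == g]
        rates[g] = sum(y_pred[i] for i in members) / len(members)
    vals = sorted(rates.values())
    return vals[-1] - vals[0]

# Toy binary decisions for two groups "a" and "b".
preds  = [1, 0, 1, 1, 0, 0, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]
print(demographic_parity_difference(preds, groups))  # 0.75 vs 0.25 → prints 0.5
```

Part of the tension the paper analyses is visible even here: a statistical gap of 0.5 says nothing by itself about whether the disparity is legally justified, which is exactly where legal non-discrimination doctrine and metric-based fairness diverge.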


The Artificial Intelligence Act: critical overview

Silva, Nuno Sousa e

arXiv.org Artificial Intelligence

This article provides a critical overview of the recently approved Artificial Intelligence Act. It starts by presenting the main structure, objectives, and approach of Regulation (EU) 2024/1689. A definition of key concepts follows, and then the material and territorial scope, as well as the timing of application, are analyzed. Although the Regulation does not explicitly set out principles, the main ideas of fairness, accountability, transparency, and equity in AI underlie a set of rules of the regulation. This is discussed before looking at the ill-defined set of forbidden AI practices (manipulation and exploitation of vulnerabilities, social scoring, biometric identification and classification, and predictive policing). It is highlighted that those rules deal with behaviors rather than AI systems. The qualification and regulation of high-risk AI systems are tackled, alongside the obligation of transparency for certain systems, the regulation of general-purpose models, and the rules on certification, supervision, and sanctions. The text concludes that even if the overall framework can be deemed adequate and balanced, the approach is so complex that it risks defeating its own purpose of promoting responsible innovation within the European Union and beyond its borders.


AI Act for the Working Programmer

Hermanns, Holger, Lauber-Rönsberg, Anne, Meinel, Philip, Sterz, Sarah, Zhang, Hanwei

arXiv.org Artificial Intelligence

The European AI Act is a new, legally binding instrument that will impose certain requirements on the development and use of AI technology potentially affecting people in Europe. The stipulations of the Act can, in turn, be expected to affect the work of many software engineers, software testers, data engineers, and other professionals across the IT sector in Europe and beyond. The 113 articles, 180 recitals, and 13 annexes that make up the Act cover 144 pages. This paper aims to provide an aid for navigating the Act from the perspective of a professional in the software domain, termed "the working programmer", who feels the need to know about the stipulations of the Act.


The AI Act proposal: a new right to technical interpretability?

Gallese, Chiara

arXiv.org Artificial Intelligence

The debate about the so-called right to explanation in AI is the subject of a wealth of literature. In the legal scholarship it has focused on Art. 22 GDPR and, in the technical scholarship, on techniques that help explain the output of a given model (explainable AI, XAI). The purpose of this work is to investigate whether the new provisions introduced by the proposal for a Regulation laying down harmonised rules on artificial intelligence (AI Act), in combination with Convention 108 plus and the GDPR, are enough to indicate the existence of a right to technical explainability in the EU legal framework and, if not, whether the EU should include it in its current legislation. This is a preliminary work submitted to the online event organised by the Information Society Law Center and will later be developed into a full paper.


The US unofficial position on upcoming EU Artificial Intelligence rules

#artificialintelligence

The United States is pushing for a narrower Artificial Intelligence definition, a broader exemption for general-purpose AI and an individualised risk assessment in the AI Act, according to a document obtained by EURACTIV. The non-paper is dated October 2022 and was sent to targeted government officials in some EU capitals and the European Commission. It echoes many of the ideas and much of the wording of the initial feedback sent to EU lawmakers last March. "Many of our comments are prompted by our growing cooperation in this area under the U.S.-EU Trade and Technology Council (TTC) and concerns over whether the proposed Act will support or restrict continued cooperation," the document reads. The document is a reaction to the progress made by the Czech Presidency of the EU Council on the AI regulation last month.


AI Act: EU Parliament's discussions heat up over facial recognition, scope

#artificialintelligence

EU lawmakers held their first political debate on the AI Act on Wednesday (5 October) as the discussion moved to more sensitive topics like the highly debated issue of biometric recognition. The AI Act is a landmark EU legislation intended to regulate Artificial Intelligence by introducing a series of obligations proportionate to the potential harm of the technology's applications. So far, the co-rapporteurs of the European Parliament, the social democrat Brando Benifei and the liberal Dragoș Tudorache, have limited the discussion to the more technical aspects, hoping to build momentum before addressing the more political hurdles. This approach has not been without success, as the file has progressed in several parts. In the meeting, the MEPs formally agreed on the first two batches of compromises, covering administrative procedures, conformity assessment, standards, and certificates.


LEAK: Commission to propose rebuttable presumption for AI-related damages

#artificialintelligence

The European Commission will present a liability regime targeted at damage originating from Artificial Intelligence (AI) that would put a causality presumption on the defendant, according to a draft obtained by EURACTIV. The AI Liability Directive is scheduled to be published on 28 September, and it is meant to complement the Artificial Intelligence Act, an upcoming regulation that introduces requirements for AI systems based on their level of risk. "This directive provides in a very targeted and proportionate manner alleviations of the burden of proof through the use of disclosure and rebuttable presumptions," the draft reads. "These measures will help persons seeking compensation for damage caused by AI systems to handle their burden of proof so that justified liability claims can be successful." The proposal follows the European Parliament's own-initiative resolution adopted in October 2020 that called for facilitating the burden of proof and a strict liability regime for AI-enabled technologies.


French Presidency pushes for alignment with the new legislative framework in AI Act

#artificialintelligence

France is proposing several changes to the Artificial Intelligence (AI) Act to ensure better alignment with the new legislative framework, the EU's legislation that regulates market surveillance and conformity assessment procedures. The changes also relate to the designation of competent authorities and the high-risk AI database. The French Presidency, which leads the work in the EU Council, shared a new compromise text on Monday (25 April) that will be discussed with the representatives of the other member states at the telecom working party on Thursday. Notified bodies will play a crucial role in the enforcement of the AI Act, as they will be designated by EU countries to assess the conformity of AI systems with EU rules before those systems are launched on the market. The new text refers explicitly to the EU regulation setting up the requirements for accreditation and market surveillance, and a reference has been added that such bodies will have to respect confidentiality obligations.